---
title: Troubleshooting Batch Prediction jobs
description: A list of common issues that occur with Batch Prediction jobs, and how to resolve them.
---

# Troubleshooting {: #troubleshooting }

The following sections describe common issues with Batch Prediction jobs and how to resolve them.

## A job is stuck in `INITIALIZING` {: #a-job-is-stuck-in-initializing }

If you are using local file intake, make sure you have made a `PUT` request with the scoring data for the job after the initial `POST` request; the job will not leave `INITIALIZING` until the data has been uploaded.
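For example, the scoring data can be pushed to the job's upload link with a `PUT` request. The URL below is a sketch: the exact upload link is returned in the `links` section of the job creation response, so use that value rather than constructing it by hand:

```shell
# Push the scoring data to the upload link returned when the job was created.
curl -X PUT https://app.datarobot.com/api/v2/batchPredictions/:id/csvUpload/ \
    -H 'Authorization: Bearer <YOUR_KEY>' \
    -H 'Content-Type: text/csv' \
    --data-binary @scoring_data.csv
```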

DataRobot only processes one job at a time per prediction instance, so your job may be queued behind other jobs. Check the job log for details:

```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/:id/ \
    -H 'Authorization: Bearer <YOUR_KEY>'
```

## A job is stuck in `RUNNING` {: #a-job-is-stuck-in-running }

The job may be running slowly, either because of a slow model or because the scoring data contains errors that the API is trying to identify. You can follow the progress of a job by requesting the job status:

```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/:id/ \
    -H 'Authorization: Bearer <YOUR_KEY>'
```

## A job was `ABORTED` {: #a-job-was-aborted }

When a job is aborted, DataRobot logs the reason to the job status. You can check job status from an individual job URL:

```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/:id/ \
    -H 'Authorization: Bearer <YOUR_KEY>'
```

Or from the listing view of all jobs:

```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/ \
    -H 'Authorization: Bearer <YOUR_KEY>'
```

## `HTTP 406` was returned when uploading a CSV file for local file intake {: #http-406-was-returned-when-uploading-a-csv-file-for-local-file-intake }

Your upload request is missing the `Content-Type: text/csv` header. Add it to the `PUT` request that pushes the scoring data.
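For example, an upload that includes the required header could look like the following (the upload URL is a sketch; use the upload link returned in the job creation response):

```shell
curl -X PUT https://app.datarobot.com/api/v2/batchPredictions/:id/csvUpload/ \
    -H 'Authorization: Bearer <YOUR_KEY>' \
    -H 'Content-Type: text/csv' \
    --data-binary @scoring_data.csv
```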

## `HTTP 422` was returned when uploading a CSV file for local file intake {: #http-422-was-returned-when-uploading-a-csv-file-for-local-file-intake }

You either:

- Already pushed CSV data for this job. To submit new data, create a new job.
- Tried to push CSV data for a job that does not require you to push data (e.g., S3 intake).
- Didn't encode your CSV data in the UTF-8 character set and didn't specify a custom encoding in `csvSettings`.
- Didn't encode your CSV data in the proper CSV format and didn't specify a custom format in `csvSettings`.
- Tried to push an empty file.

In any of the above cases, the response and the job log will contain an explanation.
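If your data uses a non-default encoding or CSV format, you can declare it via `csvSettings` when creating the job. A minimal sketch, assuming the data is ISO-8859-1 encoded and semicolon-delimited (adjust the values to match your data):

```json
{
   "deploymentId": "<id>",
   "intakeSettings": { "type": "localFile" },
   "outputSettings": { "type": "localFile" },
   "csvSettings": {
      "encoding": "iso-8859-1",
      "delimiter": ";",
      "quotechar": "\""
   }
}
```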

## Intake stream error due to date format mismatch in Oracle JDBC scoring data {: #intake-stream-error-due-to-date-format-mismatch-in-oracle-jdbc-scoring-data }

Oracle's DATE type contains a time component, which can cause issues with scoring time series data.

A model trained using the date format `yyyy-mm-dd` can result in an error for Oracle JDBC scoring data due to Oracle's DATE format.

When DataRobot reads dates from Oracle, the dates are returned in the format `yyyy-mm-dd hh:mm:ss` by default. This can cause an error when passed to a model expecting a different format.

Use one of the following workarounds to avoid this issue:

- Train the model using Oracle as the data source to ensure that the time format is the same when scored from Oracle.
- Use the `query` option instead of `table` and `schema` to allow for the use of SQL functions. Oracle's `TO_CHAR` function can be used to parse time columns before the data is scored.
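For instance, the intake settings could use a `query` that formats the date column with `TO_CHAR` before the data is scored. The table and column names below are hypothetical:

```json
{
   "type": "jdbc",
   "dataStoreId": "<id>",
   "credentialId": "<id>",
   "query": "SELECT TO_CHAR(sale_date, 'YYYY-MM-DD') AS sale_date, units_sold FROM sales"
}
```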

## The network connection broke while uploading a dataset for local file intake {: #the-network-connection-broke-while-uploading-a-dataset-for-local-file-intake }

Create a new job and re-upload the dataset. Failed uploads cannot be resumed and will eventually time out.

## The network connection became unavailable while downloading the scoring data for local file output {: #the-network-connection-became-unavailable-while-downloading-the-scoring-data-for-local-file-output }

Download the scored data again. It is available for 48 hours on the managed AI Platform (SaaS); on the Self-Managed AI Platform (VPC or on-prem), the retention period defaults to 48 hours but is configurable.

## `HTTP 404` was returned while trying to download scored data {: #http-404-was-returned-while-trying-to-download-scored-data }

You either:

- Tried to download the scored data for a job that does not have scored data available for download (e.g., S3 output).
- Started the download before the job had started scoring. In that case, wait until the `download` link becomes available in the job links and try again.

## `HTTP 406` was returned when trying to download scored data {: #http-406-was-returned-when-trying-to-download-scored-data }

Your client sent an `Accept` header that did not include `text/csv`. Either omit the `Accept` header or include `text/csv` in it.
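For example, a download request that explicitly accepts CSV could look like the following (the `download` link appears in the job's links once scoring has started; use that URL):

```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/:id/download/ \
    -H 'Authorization: Bearer <YOUR_KEY>' \
    -H 'Accept: text/csv' \
    -o scored_data.csv
```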

## `CREATE_TABLE` scoring fails due to unsupported output column name formats {: #create_table-scoring-fails-due-to-unsupported-output-column-name-formats }

The target database you are using as your output adapter may not support the column names DataRobot generates for the <a href="output-format.html">output format</a>. For example, column names such as `name (actual)_PREDICTION`, produced when scoring Time Series models, might not be supported by all databases.

To work around this issue, use the <a href="output-format.html#column-name-remapping">Column Name Remapping</a> functionality to rewrite output column names to a form your target database supports.

For instance, to remove the spaces from a column name, add `columnNamesRemapping` to the request:

```json
{
   "deploymentId":"<id>",
   "passthroughColumnsSet":"all",
   "includePredictionStatus":true,
   "intakeSettings":{
      "type":"localFile"
   },
   "outputSettings":{
      "type":"jdbc",
      "dataStoreId":"<id>",
      "credentialId":"<id>",
      "table":"table_name_of_database",
      "schema":"dbo",
      "catalog":"test",
      "statementType":"create_table"
   },
   "columnNamesRemapping":{
      "name (actual)_PREDICTION":"name_actual_PREDICTION"
   }
}
```

## Possible causes for `HTTP 422` on job creation {: #possible-causes-for-http-422-on-job-creation }

These are the possible causes for an `HTTP 422` reply when creating a new Batch Prediction job:

- You sent an unknown job parameter
- You specified a job parameter with an unexpected type or value
- You specified an unknown credential ID in either your intake or output settings
- You are attempting to score from/to the same S3/Azure/GCP URL (not supported)
- You are attempting to ingest data from the **AI Catalog**, but your account does not have access to the **AI Catalog**
- You are attempting to ingest data from the **AI Catalog** and the **AI Catalog** dataset is not snapshotted (required for predictions) or has not been successfully ingested
- You are attempting to use a time series custom model (not currently supported)
- You are attempting to use a traditional time series (ARIMA) model (not currently supported)
- You requested Prediction Explanations for a multiclass or time series project (not currently supported)
- You requested prediction warnings for a project other than a regression project (not currently supported)
- You requested prediction warnings for a project that is not properly configured with prediction boundaries
